
[Feature][310P] support shared experts path in fused MoE for qwen3.5 #7674

Open

Tflowers-0129 wants to merge 4 commits into vllm-project:main from Tflowers-0129:fix/310p-qwen35-shared-experts

Conversation

@Tflowers-0129
Contributor

@Tflowers-0129 Tflowers-0129 commented Mar 26, 2026

What this PR does / why we need it?

310P originally supported only the Qwen3 series. The recent Qwen3.5 adaptation introduced a new shared-experts structure that the 310P path did not handle. This PR adds that support, aligning the 310P execution flow with the A2/A3 implementation path.

Does this PR introduce any user-facing change?

NO

How was this patch tested?

local e2e test

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the fused Mixture-of-Experts (MoE) system to properly integrate and process shared experts, a critical feature for models like Qwen3.5. The changes ensure that the MoE layer can correctly handle the unique gating and activation requirements of shared expert configurations, particularly for the 310P Ascend environment. This improves the model's compatibility and performance with advanced MoE architectures.

Highlights

  • Shared Experts Support: Added comprehensive support for shared experts within the fused Mixture-of-Experts (MoE) implementation, specifically tailored for Qwen3.5 and Qwen3-Next models on the 310P Ascend platform.
  • Two-Part Shared Expert Processing: Introduced new internal methods, _shared_experts_part1 and _shared_experts_part2, to manage the two-stage processing of shared expert hidden states and their gating mechanisms (see the sketch after this list).
  • Qwen3.5 Specific Gating: Implemented a dedicated gate path for Qwen3.5/Qwen3-Next shared experts, incorporating torch.nn.functional.sigmoid for activation in the _shared_experts_part2 method.
  • Runner Reinitialization: Modified the __init__ method to ensure the MoE runner is reinitialized after shared experts and gate configurations are set, facilitating correct custom operation dispatch for shared expert processing.
  • Internal Router Property: Defined the is_internal_router property to return False, indicating that the 310P Ascend path expects router logits from the model's forward pass rather than an internal router.
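
For orientation, here is a minimal sketch of the flow the highlights describe. Only the names _shared_experts_part1, _shared_experts_part2, and is_internal_router come from this PR; the class name, the attribute names (shared_experts, expert_gate), the _init_runner hook, and the tensor wiring are illustrative assumptions, not the actual vllm-ascend code:

```python
import torch
import torch.nn.functional as F


class FusedMoE310PSketch(torch.nn.Module):
    """Illustrative sketch only, not the vllm-ascend implementation."""

    def __init__(self, shared_experts=None, expert_gate=None):
        super().__init__()
        # Hypothetical attributes: a dense MLP applied to every token and,
        # for Qwen3.5/Qwen3-Next, a learned gate producing a scalar logit.
        self.shared_experts = shared_experts
        self.expert_gate = expert_gate
        # Per the highlights, the PR reinitializes the MoE runner *after*
        # the shared-experts and gate configuration is set, so custom-op
        # dispatch sees the shared-expert path. The hook name is assumed.
        self._init_runner()

    def _init_runner(self):
        pass  # placeholder for runner / custom-op dispatch setup

    @property
    def is_internal_router(self) -> bool:
        # 310P path: router logits are produced by the model's forward
        # pass and passed in, not computed by a router inside this layer.
        return False

    def _shared_experts_part1(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Stage 1: run the shared-expert MLP on the input hidden states,
        # alongside the routed-expert computation.
        return self.shared_experts(hidden_states)

    def _shared_experts_part2(
        self,
        hidden_states: torch.Tensor,
        shared_out: torch.Tensor,
        routed_out: torch.Tensor,
    ) -> torch.Tensor:
        # Stage 2: for Qwen3.5/Qwen3-Next, weight the shared-expert output
        # by a sigmoid gate computed from the input, then merge it with
        # the routed-expert output.
        if self.expert_gate is not None:
            gate = F.sigmoid(self.expert_gate(hidden_states))
            shared_out = gate * shared_out
        return routed_out + shared_out
```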



@Tflowers-0129 Tflowers-0129 force-pushed the fix/310p-qwen35-shared-experts branch from 3a15471 to 8bec9e9 on March 26, 2026 at 07:18
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request enhances the FusedMoE layer for Ascend 310P with two new methods, _shared_experts_part1 and _shared_experts_part2, that manage the forward pass for shared experts, including specific handling for Qwen3.5/Qwen3-Next models with an expert_gate. It also imports torch.nn.functional and sets the is_internal_router property to False for the 310P Ascend path. A review comment suggests refactoring the repeated tuple-unpacking logic in the newly added shared-expert methods into a helper function to improve maintainability and reduce duplication; a sketch of one possible helper follows.
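
As an illustration of the suggested refactor, here is a minimal sketch. It assumes the duplicated code unpacks either a bare hidden-states tensor or a (hidden_states, residual) tuple; the actual tuple layout in the PR may differ, and the helper name _split_shared_inputs is hypothetical:

```python
from typing import Optional, Tuple, Union

import torch


def _split_shared_inputs(
    x: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
    # Hypothetical helper centralizing the tuple unpacking the review says
    # is duplicated across _shared_experts_part1 and _shared_experts_part2.
    # The assumed (hidden_states, residual) layout is illustrative only.
    if isinstance(x, tuple):
        hidden_states, residual = x
        return hidden_states, residual
    return x, None
```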

@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write a clear commit message and fill out the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@Tflowers-0129 Tflowers-0129 force-pushed the fix/310p-qwen35-shared-experts branch from 1b6091a to 4181676 on March 26, 2026 at 12:50
Signed-off-by: Tflowers-0129 <2906339855@qq.com>
Signed-off-by: Tflowers-0129 <2906339855@qq.com>
Signed-off-by: Tflowers-0129 <2906339855@qq.com>
@Tflowers-0129 Tflowers-0129 changed the title [310P] support shared experts path in fused MoE for qwen3.5 [feature][310P] support shared experts path in fused MoE for qwen3.5 Mar 29, 2026
@Tflowers-0129 Tflowers-0129 changed the title [feature][310P] support shared experts path in fused MoE for qwen3.5 [Feature][310P] support shared experts path in fused MoE for qwen3.5 Mar 29, 2026
Signed-off-by: Tflowers-0129 <2906339855@qq.com>
@Tflowers-0129 Tflowers-0129 force-pushed the fix/310p-qwen35-shared-experts branch from 778307d to a24559a on March 29, 2026 at 13:10